
Theoretical guarantees in KL for Diffusion Flow Matching

Neural Information Processing Systems

A central task in statistics and machine learning is to generate samples from a target distribution that is accessible only through a dataset.



A Closed-Form Framework for Schrödinger Bridges Between Arbitrary Densities

Huang, Hanwen

arXiv.org Machine Learning

Score-based generative models have recently attracted significant attention for their ability to generate high-fidelity data by learning maps from simple Gaussian priors to complex data distributions. A natural generalization of this idea to transformations between arbitrary probability distributions leads to the Schrödinger Bridge (SB) problem. However, SB solutions rarely admit closed-form expressions and are commonly obtained through iterative stochastic simulation procedures, which are computationally intensive and can be unstable. In this work, we introduce a unified closed-form framework for representing the stochastic dynamics of SB systems. Our formulation subsumes previously known analytical solutions, including the Schrödinger–Föllmer process and the Gaussian SB, as specific instances. Notably, the classical Gaussian SB solution, previously derived using substantially more sophisticated tools such as Riemannian geometry and generator theory, follows directly from our formulation as an immediate corollary. Leveraging this framework, we develop a simulation-free algorithm that infers SB dynamics directly from samples of the source and target distributions. We demonstrate the versatility of our approach in two settings: (i) modeling developmental trajectories in single-cell genomics and (ii) solving image restoration tasks such as inpainting and deblurring. This work opens a new direction for efficient and scalable nonlinear diffusion modeling across scientific and machine learning applications.
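As a hedged illustration (not code from the paper): the Brownian bridge pinned at a source sample and a target sample is the basic closed-form conditional path underlying many simulation-free Schrödinger Bridge constructions. Its marginal at time t is Gaussian with mean (1-t)*x0 + t*x1 and variance sigma^2 * t * (1-t), which a minimal sketch can verify empirically:

```python
import numpy as np

rng = np.random.default_rng(0)

def bridge_sample(x0, x1, t, sigma=1.0, rng=rng):
    """Sample a Brownian bridge pinned at x0 (t=0) and x1 (t=1).

    The marginal at time t is Gaussian with mean (1-t)*x0 + t*x1
    and variance sigma^2 * t * (1-t).
    """
    mean = (1.0 - t) * x0 + t * x1
    std = sigma * np.sqrt(t * (1.0 - t))
    return mean + std * rng.normal(size=np.shape(x0))

# Pin many 1-D bridges at x0 = 0 and x1 = 1 and check the closed form.
x0 = np.zeros(100000)
x1 = np.ones(100000)
xt = bridge_sample(x0, x1, 0.25)
# Empirical moments match: mean 0.25, variance 0.25 * 0.75 = 0.1875.
assert abs(xt.mean() - 0.25) < 0.01
assert abs(xt.var() - 0.1875) < 0.01
```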




Tester-Learners for Halfspaces: Universal Algorithms

Neural Information Processing Systems

In the testable learning model, the learning algorithm, or tester-learner, is given access to labeled examples from an unknown distribution and may either reject or accept the unknown distribution. If it accepts, it must successfully produce a near-optimal hypothesis.


On the Equivalence of Optimal Transport Problem and Action Matching with Optimal Vector Fields

Kornilov, Nikita, Korotin, Alexander

arXiv.org Machine Learning

The Flow Matching (FM) method in generative modeling maps between arbitrary probability distributions by constructing an interpolation between them and then learning the vector field that defines an ODE for this interpolation. Recently, it was shown that FM can be modified to map distributions optimally in terms of the quadratic cost function for any initial interpolation. To achieve this, only specific optimal vector fields, which are typical for solutions of Optimal Transport (OT) problems, need to be considered during FM loss minimization. In this note, we show that considering only optimal vector fields can lead to OT in another approach: Action Matching (AM). Unlike FM, which learns a vector field for a manually chosen interpolation between given distributions, AM learns the vector field that defines an ODE for an entire given sequence of distributions.
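For intuition, a minimal sketch of the FM construction (illustrative, not code from the note): with the straight-line interpolation x_t = (1-t)*x0 + t*x1, the conditional vector field is simply x1 - x0. FM would regress a network v_theta(x_t, t) onto this field via an MSE loss; here we only verify that following the exact field recovers the endpoint.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-D source (standard Gaussian) and target (shifted Gaussian) samples,
# paired arbitrarily — the pairing choice is a modeling assumption.
x0 = rng.normal(size=(256, 2))          # source samples
x1 = rng.normal(size=(256, 2)) + 4.0    # target samples

def interpolant(x0, x1, t):
    """Straight-line interpolation x_t = (1 - t) * x0 + t * x1."""
    return (1.0 - t) * x0 + t * x1

def target_field(x0, x1):
    """Conditional vector field dx_t/dt = x1 - x0 for the linear path."""
    return x1 - x0

# FM minimizes E || v_theta(x_t, t) - (x1 - x0) ||^2 over t and pairs;
# here we check that following the exact field lands on the endpoint.
t = 0.5
xt = interpolant(x0, x1, t)
x_end = xt + (1.0 - t) * target_field(x0, x1)   # exact step to t = 1
assert np.allclose(x_end, x1)
```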


Unfolding Generative Flows with Koopman Operators: Fast and Interpretable Sampling

Turan, Erkan, Siozopoulos, Aristotelis, Martinez, Louis, Gaubil, Julien, Pierson, Emery, Ovsjanikov, Maks

arXiv.org Artificial Intelligence

Continuous Normalizing Flows (CNFs) enable elegant generative modeling but remain bottlenecked by slow sampling: producing a single sample requires solving a nonlinear ODE with hundreds of function evaluations. Recent approaches such as Rectified Flow and OT-CFM accelerate sampling by straightening trajectories, yet the learned dynamics remain nonlinear black boxes, limiting both efficiency and interpretability. We propose a fundamentally different perspective: globally linearizing flow dynamics via Koopman theory. By lifting Conditional Flow Matching (CFM) into a higher-dimensional Koopman space, we represent its evolution with a single linear operator. This yields two key benefits. First, sampling becomes one-step and parallelizable, computed in closed form via the matrix exponential. Second, the Koopman operator provides a spectral blueprint of generation, enabling novel interpretability through its eigenvalues and modes. We derive a practical, simulation-free training objective that enforces infinitesimal consistency with the teacher's dynamics and show that this alignment preserves fidelity along the full generative path, distinguishing our method from boundary-only distillation. Empirically, our approach achieves competitive sample quality with dramatic speedups, while uniquely enabling spectral analysis of generative flows.
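To illustrate the Koopman idea on a toy system (this is a textbook example, not the paper's method): a nonlinear ODE with an exact finite-dimensional Koopman embedding can be evolved in one shot with a matrix exponential, with no ODE-solver loop.

```python
import numpy as np

def taylor_expm(A, terms=30):
    """Matrix exponential via a truncated Taylor series (fine for small, mild A)."""
    E = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        E = E + term
    return E

# Nonlinear ODE with an exact finite Koopman embedding:
#   x' = mu * x,   y' = lam * (y - x^2)
# Lifting to z = (x, y, x^2) makes the dynamics linear: z' = K z.
mu, lam = -0.2, -1.0
K = np.array([[mu, 0.0, 0.0],
              [0.0, lam, -lam],
              [0.0, 0.0, 2 * mu]])

x0, y0 = 1.0, 2.0
z0 = np.array([x0, y0, x0 ** 2])

t = 1.0
zt = taylor_expm(K * t) @ z0      # one-step evolution via the matrix exponential
x_t, y_t = zt[0], zt[1]

# Closed-form solution of the original nonlinear ODE for comparison.
c = lam * x0 ** 2 / (lam - 2 * mu)
x_exact = x0 * np.exp(mu * t)
y_exact = (y0 - c) * np.exp(lam * t) + c * np.exp(2 * mu * t)
assert np.isclose(x_t, x_exact) and np.isclose(y_t, y_exact)
```

The same pattern, with a learned lifting and operator, is what makes one-step, parallelizable sampling possible.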


Quantizer Design for Finite Model Approximations, Model Learning, and Quantized Q-Learning for MDPs with Unbounded Spaces

Bicer, Osman, Kara, Ali D., Yuksel, Serdar

arXiv.org Artificial Intelligence

In this paper, for Markov decision processes (MDPs) with unbounded state spaces, we refine the upper bounds presented in [Kara et al., JMLR'23] on finite model approximation errors by optimizing the quantizers used for the finite model approximations. We also consider the implications for quantizer design in quantized Q-learning and empirical model learning, and for the performance of policies obtained via Q-learning where the quantized state is treated as the state itself. We highlight the distinctions between planning, where approximating MDPs can be designed independently, and learning (via either Q-learning or empirical model learning), where approximating MDPs are restricted to be defined by invariant measures of Markov chains under exploration policies; this leads to significant subtleties in quantizer design and performance, even though asymptotic near optimality can be established under both setups. In particular, under Lyapunov growth conditions, we obtain explicit upper bounds which decay to zero as the number of bins approaches infinity.
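As a hedged sketch of the quantized Q-learning setup (the toy MDP, reward, and bin count below are illustrative assumptions, not from the paper): a uniform quantizer maps the continuous state to a bin index, and tabular Q-learning treats that index as the state itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(s, n_bins, lo=0.0, hi=1.0):
    """Uniform quantizer: map a continuous state in [lo, hi] to a bin index."""
    idx = int((s - lo) / (hi - lo) * n_bins)
    return min(max(idx, 0), n_bins - 1)

def step(s, a):
    """Toy MDP on [0, 1]: action 0 drifts left, action 1 drifts right."""
    drift = -0.1 if a == 0 else 0.1
    s_next = np.clip(s + drift + 0.05 * rng.normal(), 0.0, 1.0)
    reward = -abs(s_next - 0.8)   # states near 0.8 are preferred
    return s_next, reward

# Quantized Q-learning: the bin index is treated as the state itself.
n_bins, n_actions = 10, 2
Q = np.zeros((n_bins, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.2
s = 0.5
for _ in range(20000):
    i = quantize(s, n_bins)
    a = rng.integers(n_actions) if rng.random() < eps else int(Q[i].argmax())
    s_next, r = step(s, a)
    j = quantize(s_next, n_bins)
    Q[i, a] += alpha * (r + gamma * Q[j].max() - Q[i, a])
    s = s_next

# The greedy action around the well-visited start region should be "right",
# pushing the state toward the rewarding region near 0.8.
assert int(Q[quantize(0.5, n_bins)].argmax()) == 1
```

Note that the visitation of each bin here is governed by the exploration policy's invariant measure, which is exactly the restriction the abstract contrasts with the planning setting.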